Reliably planning fingertip grasps for multi-fingered hands remains a key challenge for many tasks, including tool use, insertion, and dexterous in-hand manipulation. This task becomes even more difficult when the robot lacks an accurate model of the object to be grasped. Tactile sensing offers a promising approach to account for uncertainties in object shape. However, current robotic hands tend to lack full tactile coverage, which raises the problem of how to plan and execute grasps for multi-fingered hands such that contact is made with the areas covered by the tactile sensors. To address this issue, we propose a grasp-planning approach that explicitly reasons about where the fingertips should contact the estimated object surface while maximizing the probability of grasp success. Key to our method's success is the use of visual surface estimation for initial planning to encode the contact constraint. The robot then executes this plan using a tactile-feedback controller that enables it to adapt to online estimates of the object's surface and correct for errors in the initial plan. Importantly, the robot never explicitly integrates object pose or surface estimates between visual and tactile sensing; instead, it uses the two modalities in complementary ways. Vision guides the robot's motion prior to contact; touch updates the plan when contact occurs differently than predicted from vision. We show that our method successfully synthesizes and executes precision grasps for previously unseen objects using surface estimates from a single camera view. Furthermore, our approach outperforms a state-of-the-art multi-fingered grasp planner, as well as several baselines we propose.
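To make the plan-then-adapt structure concrete, here is a minimal Python sketch, not the paper's implementation: `success_prob` (a learned grasp-success model), `move_toward` (a Cartesian fingertip controller), and `in_contact` (a per-finger tactile flag) are hypothetical stand-ins for components the abstract only names.

```python
import numpy as np

def nearest_surface_contacts(tips, surface_pts):
    """Project each desired fingertip position onto the closest point of the
    visually estimated object surface (a partial-view point cloud)."""
    dists = np.linalg.norm(surface_pts[None, :, :] - tips[:, None, :], axis=2)
    return surface_pts[np.argmin(dists, axis=1)]

def plan_grasp(candidate_tip_sets, surface_pts, success_prob):
    """Snap candidate fingertip sets onto the surface estimate (the contact
    constraint), then keep the set the learned model scores highest."""
    best_contacts, best_p = None, -np.inf
    for tips in candidate_tip_sets:
        contacts = nearest_surface_contacts(tips, surface_pts)
        p = success_prob(contacts)            # hypothetical learned model
        if p > best_p:
            best_contacts, best_p = contacts, p
    return best_contacts, best_p

def execute_with_tactile_feedback(contacts, move_toward, in_contact, max_steps=200):
    """Close the fingers toward the planned contacts; if a tactile sensor
    fires early, the visual estimate was wrong there, so that finger stops
    at the actual contact instead of the planned one."""
    done = [False] * len(contacts)
    for _ in range(max_steps):
        for i, goal in enumerate(contacts):
            if not done[i]:
                if in_contact(i):             # touch overrides the visual plan
                    done[i] = True
                else:
                    move_toward(i, goal)      # small step along the approach
        if all(done):
            return True
    return False
```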
Robots operating in human environments must be able to rearrange objects into semantically-meaningful configurations, even if these objects are previously unseen. In this work, we focus on the problem of building physically-valid structures without step-by-step instructions. We propose StructDiffusion, which combines a diffusion model and an object-centric transformer to construct structures out of a single RGB-D image based on high-level language goals, such as "set the table." Our method shows how diffusion models can be used for complex multi-step 3D planning tasks. StructDiffusion improves success rate on assembling physically-valid structures out of unseen objects by an average of 16% over an existing multi-modal transformer model, while allowing us to use one multi-task model to produce a wider range of different structures. We show experiments with held-out objects both in simulation and on real-world rearrangement tasks. For videos and additional results, check out our website: http://weiyuliu.com/StructDiffusion/.
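For intuition about how a diffusion model yields object placements, here is a hedged sketch of a DDPM-style reverse process over per-object 6-DoF pose parameters, conditioned on object and language features. The `denoiser` argument stands in for the paper's object-centric transformer; the noise schedule and pose parameterization are illustrative assumptions, not StructDiffusion's actual configuration.

```python
import torch

@torch.no_grad()
def sample_object_poses(denoiser, obj_feats, lang_feat, n_objects, T=1000):
    """DDPM reverse process: start from Gaussian noise over per-object poses
    and iteratively denoise, conditioned on perception and language."""
    betas = torch.linspace(1e-4, 0.02, T)
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)
    x = torch.randn(n_objects, 6)             # 6-DoF pose params per object
    for t in reversed(range(T)):
        eps = denoiser(x, obj_feats, lang_feat, t)   # predicted noise
        coef = betas[t] / torch.sqrt(1.0 - alpha_bars[t])
        x = (x - coef * eps) / torch.sqrt(alphas[t])
        if t > 0:
            x = x + torch.sqrt(betas[t]) * torch.randn_like(x)
    return x
```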
Objects rarely sit in isolation in human environments. We therefore want our robots to reason about how multiple objects relate to one another, and how those relations may change as the robot interacts with the world. To this end, we propose a novel graph neural network framework for multi-object manipulation that predicts how inter-object relations change under robot actions. Our model operates on partial-view point clouds and can reason about multiple objects dynamically interacting during manipulation. By learning a dynamics model in a learned latent graph embedding space, our model enables multi-step planning to reach target goal relations. We show that our model, trained purely in simulation, transfers well to the real world. Our planner enables the robot to rearrange a variable number of objects using both pushing and pick-and-place skills.
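One common way to plan multi-step actions with such a learned latent dynamics model is random-shooting model-predictive control. The sketch below assumes hypothetical `dynamics` and `relation_head` modules (the latter outputting relation probabilities in [0, 1]); it illustrates the general recipe, not the paper's specific planner.

```python
import torch

@torch.no_grad()
def random_shooting_plan(dynamics, relation_head, z0, goal_rel,
                         n_samples=256, horizon=5, action_dim=4):
    """Sample action sequences, roll each through the latent graph dynamics,
    and keep the sequence whose predicted final relations match the goal."""
    action_seqs = torch.randn(n_samples, horizon, action_dim)
    best_cost, best_seq = float("inf"), None
    for seq in action_seqs:
        z = z0
        for a in seq:
            z = dynamics(z, a)                # one-step latent prediction
        pred_rel = relation_head(z)           # decode inter-object relations
        cost = torch.nn.functional.binary_cross_entropy(pred_rel, goal_rel).item()
        if cost < best_cost:
            best_cost, best_seq = cost, seq
    return best_seq, best_cost
```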
3D pose estimation is important for analyzing and improving the ergonomics of physical human-robot interaction and for reducing the risk of musculoskeletal disorders. Vision-based pose estimation approaches are prone to sensor and model errors, as well as occlusion, while pose estimation solely from the interacting robot's trajectory suffers from ambiguous solutions. To benefit from the advantages of both approaches and improve upon their drawbacks, we introduce a low-cost, non-intrusive, and occlusion-robust multi-sensory 3D pose estimation algorithm for physical human-robot interaction. We use 2D poses from OpenPose over a single camera, together with the trajectory of the interacting robot while the human performs a task. We model the problem as a partially observable dynamical system and infer the 3D pose via particle filtering. We present our work on teleoperation, but it can be generalized to other applications of physical human-robot interaction. We show that our multi-sensory system resolves human kinematic redundancy better than pose estimation using OpenPose alone or using only the robot's trajectory, improving the accuracy of the estimated pose as compared against gold-standard motion-capture poses. Moreover, our approach also outperforms the other single-sensory methods when the pose is evaluated with the RULA assessment tool.
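The core inference step is a standard particle filter predict-update cycle; a minimal sketch follows, where `propagate`, `lik_2d`, and `lik_robot` are hypothetical callables standing in for the human motion model, the OpenPose reprojection likelihood, and the robot-trajectory constraint, and `rng` is a numpy `Generator`.

```python
import numpy as np

def particle_filter_step(particles, weights, propagate, lik_2d, lik_robot, rng):
    """One predict-update cycle: propagate 3D-pose particles through a human
    motion model, then reweight them by agreement with the OpenPose 2D
    detections and with the interacting robot's end-effector constraint."""
    particles = propagate(particles)                       # prediction step
    weights = weights * lik_2d(particles) * lik_robot(particles)
    weights /= weights.sum()
    # resample when the effective sample size collapses (weight degeneracy)
    if 1.0 / np.sum(weights ** 2) < 0.5 * len(weights):
        idx = rng.choice(len(weights), size=len(weights), p=weights)
        particles = particles[idx]
        weights = np.full(len(weights), 1.0 / len(weights))
    return particles, weights
```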
Although tactile skins have been shown to be useful for detecting collisions between a robotic arm and its environment, they have not been extensively used for improving robotic grasping and in-hand manipulation. We propose a novel sensor design for covering existing multi-fingered robot hands. We analyze the performance of four different piezoresistive materials using both fabric and anti-static foam substrates in benchtop experiments. We find that although the piezoresistive foam was designed as a packaging material and not for use as a sensing substrate, it performs comparably to fabrics specifically designed for this purpose. While these results demonstrate the potential of piezoresistive foams for tactile sensing applications, they do not fully characterize the efficacy of these sensors for use in robot manipulation. As such, we use a low-density foam substrate to develop a scalable tactile skin that can be attached to the palm of a robotic hand. We demonstrate several robotic manipulation tasks using this sensor, showing its ability to reliably detect and localize contact, and we analyze contact patterns during grasping and transport tasks. Our project website provides details on all materials, software, and data used in the sensor development and analysis: https://sites.google.com/gcloud.utah.edu/piezoresistive-tactile-sensing/.
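Contact detection and localization on such a taxel array can be sketched as baseline-subtracted thresholding followed by a pressure-weighted centroid; this is a generic illustration under assumed array geometry and noise parameters, not the project's released software.

```python
import numpy as np

def detect_and_localize_contact(readings, taxel_xy, baseline, noise_std, k=3.0):
    """Flag taxels whose baseline-subtracted signal exceeds k noise standard
    deviations, then return the pressure-weighted centroid of the active
    taxels as the estimated contact location (or None if no contact)."""
    activation = readings - baseline          # change from resting state
    active = activation > k * noise_std
    if not np.any(active):
        return None                           # no contact detected
    w = activation[active]
    return (taxel_xy[active] * w[:, None]).sum(axis=0) / w.sum()
```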
Deep learning-based 3D human pose estimation performs best when trained on large amounts of labeled data, making combined learning from many datasets an important research direction. One obstacle to this endeavor is the different skeleton formats provided by different datasets, i.e., they do not label the same set of anatomical landmarks. There is little prior research on how to best supervise one model with such discrepant labels. We show that simply using separate output heads for different skeletons results in inconsistent depth estimates and insufficient information sharing across skeletons. As a remedy, we propose a novel affine-combining autoencoder (ACAE) method to perform dimensionality reduction on the number of landmarks. The discovered latent 3D points capture the redundancy among skeletons, enabling enhanced information sharing when used for consistency regularization. Our approach scales to an extreme multi-dataset regime, where we use 28 3D human pose datasets to supervise one model, which outperforms prior work on a range of benchmarks, including the challenging 3D Poses in the Wild (3DPW) dataset. Our code and models are available for research purposes.
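A minimal sketch of the affine-combining idea: encoder and decoder are linear maps over joints whose weight rows are normalized to sum to 1, so latent points are affine combinations of the input joints and the mapping commutes with translation and scaling. The initialization and normalization details here are illustrative assumptions, not the paper's exact ACAE.

```python
import torch
import torch.nn as nn

class AffineCombiningAutoencoder(nn.Module):
    """Map n_joints 3D landmarks to n_latent latent points and back via
    affine combinations (learnable weights constrained to sum to 1)."""

    def __init__(self, n_joints, n_latent):
        super().__init__()
        enc0 = torch.full((n_latent, n_joints), 1.0 / n_joints)
        dec0 = torch.full((n_joints, n_latent), 1.0 / n_latent)
        self.enc_w = nn.Parameter(enc0 + 0.01 * torch.randn_like(enc0))
        self.dec_w = nn.Parameter(dec0 + 0.01 * torch.randn_like(dec0))

    @staticmethod
    def _affine(w):
        return w / w.sum(dim=1, keepdim=True)   # each row sums to 1

    def forward(self, joints):                   # joints: (batch, n_joints, 3)
        latent = self._affine(self.enc_w) @ joints
        recon = self._affine(self.dec_w) @ latent
        return latent, recon
```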
Imitation learning (IL) is a simple and powerful way to use high-quality human driving data, which can be collected at scale, to identify driving preferences and produce human-like behavior. However, policies based on imitation learning alone often fail to sufficiently account for safety and reliability concerns. In this paper, we show how imitation learning combined with reinforcement learning using simple rewards can substantially improve the safety and reliability of driving policies over those learned from imitation alone. In particular, we use a combination of imitation and reinforcement learning to train a policy on over 100k miles of urban driving data, and measure its effectiveness in test scenarios grouped by different levels of collision risk. To our knowledge, this is the first application of a combined imitation and reinforcement learning approach in autonomous driving that utilizes large amounts of real-world human driving data.
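A generic way to combine the two signals is a weighted sum of a behavior-cloning loss on human driving data and a policy-gradient loss driven by simple rewards (e.g., collision and off-road penalties). The sketch below assumes a `policy` that returns a torch distribution; it illustrates the general recipe rather than the paper's specific algorithm.

```python
import torch

def combined_il_rl_loss(policy, expert_obs, expert_act,
                        rl_obs, rl_act, advantages,
                        bc_weight=1.0, rl_weight=1.0):
    """Behavior cloning on expert data plus a policy-gradient term whose
    advantages come from simple safety-oriented rewards."""
    bc = -policy(expert_obs).log_prob(expert_act).mean()        # imitation
    pg = -(policy(rl_obs).log_prob(rl_act) * advantages).mean() # reinforcement
    return bc_weight * bc + rl_weight * pg
```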
Hawkes processes have recently risen to the forefront of tools when it comes to modeling and generating sequential events data. Multidimensional Hawkes processes model both the self- and cross-excitation between different types of events and have been applied successfully in various domains, such as finance, epidemiology, and personalized recommendations, among others. In this work we present an adaptation of the Frank-Wolfe algorithm for learning multidimensional Hawkes processes. Experimental results show that our approach achieves parameter-estimation accuracy better than or on par with other first-order methods, while enjoying a significantly faster runtime.
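For intuition, here is the generic Frank-Wolfe template such an adaptation builds on: each iteration calls a linear minimization oracle over the feasible set instead of computing a projection. For Hawkes learning, `grad_f` would be the gradient of the negative log-likelihood over the baseline intensities and excitation matrix, and `vertices` would encode the constraint set as a polytope; both are hypothetical here, and the actual paper's oracle and constraints may differ.

```python
import numpy as np

def frank_wolfe(grad_f, vertices, x0, n_iters=200):
    """Projection-free Frank-Wolfe: at each step, move toward the vertex of
    the feasible polytope that best correlates with the negative gradient."""
    x = x0.copy()
    for k in range(n_iters):
        g = grad_f(x)
        s = vertices[np.argmin(vertices @ g)]   # linear minimization oracle
        gamma = 2.0 / (k + 2.0)                 # standard step-size schedule
        x = (1.0 - gamma) * x + gamma * s
    return x
```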
Importance: Social determinants of health (SDOH) are known to be associated with increased risk of suicidal behaviors, but few studies have utilized SDOH from unstructured electronic health record (EHR) notes. Objective: To investigate associations between suicide and recent SDOH, identified using structured and unstructured data. Design: Nested case-control study. Setting: EHR data from the US Veterans Health Administration (VHA). Participants: 6,122,785 Veterans who received care in the US VHA between October 1, 2010, and September 30, 2015. Exposures: Occurrence of SDOH over a maximum span of two years, compared with no occurrence of SDOH. Main Outcomes and Measures: Cases of suicide death were matched with 4 controls on birth year, cohort entry date, sex, and duration of follow-up. We developed an NLP system to extract SDOH from unstructured notes. Structured data, NLP on unstructured data, and their combination yielded seven, eight, and nine SDOH, respectively. Adjusted odds ratios (aORs) and 95% confidence intervals (CIs) were estimated using conditional logistic regression. Results: In our cohort, 8,821 Veterans committed suicide during 23,725,382 person-years of follow-up (incidence rate 37.18/100,000 person-years). Our cohort was mostly male (92.23%) and white (76.99%). Across the six SDOH common to both data sources, NLP-extracted SDOH covered, on average, 84.38% of all SDOH occurrences. All SDOH, whether measured from structured data or by NLP, were significantly associated with increased risk of suicide. The SDOH with the largest effect was legal problems (aOR=2.67, 95% CI=2.46-2.89), followed by violence (aOR=2.26, 95% CI=2.11-2.43). NLP-extracted and structured SDOH were each associated with suicide. Conclusions and Relevance: NLP-extracted SDOH were consistently and significantly associated with increased risk of suicide among Veterans, suggesting the potential of NLP in public health studies.
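The matched-set analysis corresponds to conditional logistic regression. Below is a minimal sketch assuming statsmodels' `ConditionalLogit` and a dataframe with a binary `case` column, a `matched_set` identifier, and one indicator column per SDOH; the column names are hypothetical, not the study's actual schema.

```python
import numpy as np
import pandas as pd
from statsmodels.discrete.conditional_models import ConditionalLogit

def matched_case_control_aor(df, sdoh_cols):
    """Fit a conditional logit on 1:4 matched sets; exponentiated
    coefficients and confidence bounds give adjusted odds ratios (aORs)."""
    model = ConditionalLogit(df["case"], df[sdoh_cols], groups=df["matched_set"])
    res = model.fit()
    ci = np.exp(res.conf_int())               # CI on the odds-ratio scale
    return pd.DataFrame({"aOR": np.exp(res.params),
                         "CI_low": ci[0], "CI_high": ci[1]})
```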
Multilevel Stein variational gradient descent is a method for particle-based variational inference that leverages hierarchies of approximations of target distributions with varying costs and fidelity to computationally speed up inference. This work provides a cost complexity analysis of multilevel Stein variational gradient descent that applies under milder conditions than previous results, especially in discrete-in-time regimes and beyond the limited settings where Stein variational gradient descent achieves exponentially fast convergence. The analysis shows that the convergence rate of Stein variational gradient descent enters only as a constant factor for the cost complexity of the multilevel version, which means that the costs of the multilevel version scale independently of the convergence rate of Stein variational gradient descent on a single level. Numerical experiments with Bayesian inverse problems of inferring discretized basal sliding coefficient fields of the Arolla glacier ice demonstrate that multilevel Stein variational gradient descent achieves orders of magnitude speedups compared to its single-level version.
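The single-level building block is the standard SVGD update with an RBF kernel, sketched below; the multilevel scheme runs such updates against a hierarchy of increasingly accurate (and expensive) target densities, warm-starting particles across levels, which the sketch does not show. `grad_log_p` and the bandwidth choice are assumptions.

```python
import numpy as np

def svgd_step(particles, grad_log_p, bandwidth=1.0, step_size=0.1):
    """One SVGD update: particles follow the kernel-smoothed gradient of
    log p (attraction to high density) plus a kernel-gradient repulsion
    term that keeps the particle ensemble spread out."""
    n = particles.shape[0]
    diff = particles[:, None, :] - particles[None, :, :]   # diff[i, j] = x_i - x_j
    K = np.exp(-np.sum(diff ** 2, axis=2) / (2.0 * bandwidth ** 2))
    grads = grad_log_p(particles)                          # (n, d) array
    drive = K @ grads                                      # attractive term
    repulse = (K[:, :, None] * diff).sum(axis=1) / bandwidth ** 2
    return particles + step_size * (drive + repulse) / n
```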